40 research outputs found
Spatial frequency based video stream analysis for object classification and recognition in clouds
The recent rise in multimedia technology has made it easier to perform a number of tasks. One of these is monitoring, where cheap cameras produce large amounts of video data. This video data is then processed for object classification to extract useful information. However, the video data obtained by these cheap cameras is often of low quality, resulting in blurred video content. Moreover, illumination effects caused by varying lighting conditions further degrade the video quality. These effects present severe challenges for object classification. We present a cloud-based, blur- and illumination-invariant approach for object classification from images and video data. Bi-dimensional empirical mode decomposition (BEMD) is adopted to decompose a video frame into intrinsic mode functions (IMFs). These IMFs then undergo a first-order Riesz transform to generate monogenic video frames. Each IMF is analysed by observing the local properties (amplitude, phase and orientation) generated from each monogenic video frame. We propose a stack-based hierarchy of local pattern features generated from the amplitudes of each IMF, which yields blur- and illumination-invariant object classification. Extensive experimentation on video streams as well as publicly available image datasets reveals that our system maintains high accuracy, from 0.97 down to 0.91, as Gaussian blur increases from 0.5 to 5, and outperforms state-of-the-art techniques under uncontrolled conditions. The system also proved scalable, with high throughput when tested on a number of video streams using cloud infrastructure.
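The core per-IMF step above (first-order Riesz transform, then local amplitude/phase/orientation of the monogenic signal) can be sketched in a few lines of NumPy. This is a minimal illustration on a single toy frame rather than on BEMD-decomposed IMFs; the function name and the sinusoidal test frame are illustrative assumptions, not from the paper.

```python
import numpy as np

def riesz_monogenic(frame):
    """First-order Riesz transform of a 2-D frame via the FFT,
    returning the local amplitude, phase and orientation of the
    resulting monogenic signal."""
    rows, cols = frame.shape
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(u**2 + v**2)
    radius[0, 0] = 1.0                       # avoid division by zero at DC
    F = np.fft.fft2(frame)
    r1 = np.real(np.fft.ifft2(F * (-1j * u / radius)))   # Riesz, vertical
    r2 = np.real(np.fft.ifft2(F * (-1j * v / radius)))   # Riesz, horizontal
    amplitude = np.sqrt(frame**2 + r1**2 + r2**2)
    phase = np.arctan2(np.sqrt(r1**2 + r2**2), frame)
    orientation = np.arctan2(r2, r1)
    return amplitude, phase, orientation

# toy frame: a horizontal sinusoidal grating (two full periods per row)
x = np.linspace(0, 4 * np.pi, 64, endpoint=False)
frame = np.tile(np.sin(x), (64, 1))
amp, ph, ori = riesz_monogenic(frame)
```

For a pure sinusoid the monogenic amplitude is constant (here ~1 everywhere), which is exactly the blur-insensitive local-energy property the feature hierarchy builds on.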
Distinguishing Posed and Spontaneous Smiles by Facial Dynamics
The smile is one of the key elements in identifying emotions and the present state of mind of an individual. In this work, we propose a cluster of approaches to classify posed and spontaneous smiles using deep convolutional neural network (CNN) face features, local phase quantization (LPQ), dense optical flow and histogram of oriented gradients (HOG). Eulerian Video Magnification (EVM) is used for micro-expression smile amplification, along with three normalization procedures for distinguishing posed and spontaneous smiles. Although the deep CNN face model is trained on a large number of face images, HOG features outperform this model on the overall face smile classification task. Using EVM to amplify micro-expressions did not have a significant impact on classification accuracy, while normalizing facial features improved it. Unlike many manual or semi-automatic methodologies, our approach aims to automatically classify all smiles into either 'spontaneous' or 'posed' categories using support vector machines (SVM). Experimental results on the large UvA-NEMO smile database are promising compared to other relevant methods.
Comment: 16 pages, 8 figures, ACCV 2016, Second Workshop on Spontaneous Facial Behavior Analysis
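The best-performing feature in the abstract above is HOG, which in the full pipeline would be fed to an SVM. A minimal NumPy-only sketch of the HOG computation itself (per-cell orientation histograms weighted by gradient magnitude; block normalization and the SVM stage are omitted for brevity, and the cell/bin parameters are illustrative assumptions):

```python
import numpy as np

def hog_descriptor(image, cell=8, bins=9):
    """Minimal histogram-of-oriented-gradients (HOG) descriptor:
    for each cell, histogram the unsigned gradient orientation,
    weighted by gradient magnitude; L2-normalize the result."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180.0     # unsigned orientation
    h, w = image.shape
    cells_y, cells_x = h // cell, w // cell
    descriptor = np.zeros((cells_y, cells_x, bins))
    bin_width = 180.0 / bins
    for cy in range(cells_y):
        for cx in range(cells_x):
            m = magnitude[cy*cell:(cy+1)*cell, cx*cell:(cx+1)*cell]
            a = angle[cy*cell:(cy+1)*cell, cx*cell:(cx+1)*cell]
            idx = np.minimum((a // bin_width).astype(int), bins - 1)
            for b in range(bins):
                descriptor[cy, cx, b] = m[idx == b].sum()
    flat = descriptor.ravel()
    return flat / (np.linalg.norm(flat) + 1e-12)

# stand-in for a cropped face image
face = np.random.default_rng(0).random((64, 64))
feat = hog_descriptor(face)        # 8x8 cells x 9 bins = 576-dim vector
```

In practice one would use a tuned implementation (e.g. `skimage.feature.hog`) and train the SVM on these vectors; this sketch only shows what the descriptor encodes.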
Blur-Robust Face Recognition via Transformation Learning
Abstract. This paper introduces a new method for recognizing faces degraded by blur using transformation learning on the image features. The basic idea is to transform both the sharp images and the blurred images into the same feature subspace by the method of multidimensional scaling. Unlike methods that seek blur-invariant descriptors, our method learns a transformation that both preserves the manifold structure of the original sharp images and, at the same time, enhances class separability, making it applicable to a wide range of descriptors. Furthermore, we combine our method with a subspace-based point spread function (PSF) estimation method to handle cases of unknown blur degree, by applying the feature transformation corresponding to the best-matched PSF, where the transformation for each PSF is learned in the training stage. Experimental results on the FERET database show that the proposed method achieves comparable performance against state-of-the-art blur-invariant face recognition methods, such as LPQ and FADEIN.
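The key idea (learn, per candidate PSF, a transformation at training time and apply the one matching the estimated PSF at test time) can be illustrated with a much simpler stand-in: instead of the paper's MDS embedding with class-separability constraints, the toy code below learns a per-PSF linear map by ridge regression that pulls blurred features back toward the sharp feature space. All data, dimensions and the regression formulation are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy gallery: 100 sharp feature vectors of dimension 32
sharp = rng.normal(size=(100, 32))

# one fixed PSF, modeled here as a linear operator on the features
blur_op = np.eye(32) * 0.5 + rng.normal(scale=0.02, size=(32, 32))
blurred = sharp @ blur_op.T

# training stage: learn a linear transform A for this PSF by ridge
# least squares, minimizing ||blurred @ A.T - sharp||^2 + lam ||A||^2
lam = 1e-3
A = np.linalg.solve(blurred.T @ blurred + lam * np.eye(32),
                    blurred.T @ sharp).T

# test stage: apply the transform learned for the best-matched PSF
restored = blurred @ A.T
err_before = np.linalg.norm(blurred - sharp) / np.linalg.norm(sharp)
err_after = np.linalg.norm(restored - sharp) / np.linalg.norm(sharp)
```

The learned map drives the feature-space mismatch down by orders of magnitude on this synthetic example; the paper's actual objective additionally preserves manifold structure and class separability rather than just reconstruction error.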
Blur invariant pattern recognition and registration in the Fourier domain
Abstract
Pattern recognition and registration are integral elements of computer vision, which deals with image patterns. This thesis presents novel blur-invariant, and combined blur- and geometric-invariant, features for pattern recognition and registration of images. These global or local features are based on the phase of the Fourier transform, and are invariant or insensitive to image blurring with a centrally symmetric point spread function, which can result, for example, from linear motion or out-of-focus optics.
The global features are based on the even powers of the phase-only discrete Fourier spectrum or bispectrum of an image, and are invariant to centrally symmetric blur. These global features are used for object recognition and image registration, and are extended to geometric invariance up to a similarity transformation: shift invariance is obtained using the bispectrum, and rotation-scale invariance using a log-polar mapping of bispectrum slices. Affine invariance can be achieved as well, using rotated sets of log-log mapped bispectrum slices. The novel invariants are shown to be more robust to additive noise than the earlier blur, and combined blur and geometric, invariants based on image moments.
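The reason even powers of the phase-only spectrum are blur invariant is that a centrally symmetric PSF has a real-valued DFT, so convolving with it shifts each Fourier phase by exactly 0 or π; squaring the phase-only spectrum cancels these sign flips. A small NumPy demonstration (toy random image and PSF; the threshold masking out near-zero spectral bins is a numerical precaution, not part of the theory):

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.random((32, 32))

# build a centrally symmetric PSF: psf[i, j] == psf[-i, -j] (mod 32),
# which guarantees its DFT is real-valued
p = rng.random((32, 32))
psf = p + np.roll(p[::-1, ::-1], (1, 1), axis=(0, 1))
psf /= psf.sum()

# blur = circular convolution via the FFT
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))

def phase_only_sq(image):
    """Square of the phase-only spectrum, exp(2j * phase): even powers
    cancel the 0/pi phase contribution of a centrally symmetric PSF."""
    F = np.fft.fft2(image)
    return np.exp(2j * np.angle(F))

inv_sharp = phase_only_sq(img)
inv_blur = phase_only_sq(blurred)
```

Away from spectral bins where the blurred spectrum is numerically tiny, `inv_sharp` and `inv_blur` agree, which is exactly the invariance the global features build on.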
The local features are computed using the short-term Fourier transform in local windows around the points of interest. Only the lowest horizontal, vertical, and diagonal frequency coefficients are used, whose phase is insensitive to centrally symmetric blur. The phases of these four frequency coefficients are quantized and used to form a descriptor code for the local region. When these local descriptors are used for texture classification, they are computed for every pixel and accumulated into a histogram that describes the local pattern. No earlier texture features had been claimed to be invariant to blur. The proposed descriptors were superior in the classification of blurred textures compared to several non-blur-invariant state-of-the-art texture classification methods.
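The local descriptor described above can be sketched as follows: compute the four lowest-frequency STFT coefficients per pixel, quantize the signs of their real and imaginary parts into an 8-bit code, and histogram the codes. This is a simplified LPQ-style sketch (the decorrelation/whitening step of full LPQ is omitted, and the window size and test texture are illustrative); it requires NumPy ≥ 1.20 for `sliding_window_view`.

```python
import numpy as np

def lpq_descriptor(image, win=7):
    """LPQ-style local phase descriptor: per pixel, quantize the signs
    of the real and imaginary parts of four low-frequency STFT
    coefficients into an 8-bit code, then histogram the codes."""
    n = np.arange(win) - win // 2
    a = 1.0 / win                                  # lowest nonzero frequency
    e = lambda f: np.exp(-2j * np.pi * f * n)      # 1-D complex exponential
    # separable kernels for frequencies (0,a), (a,0), (a,a), (a,-a):
    # horizontal, vertical, and the two diagonals
    kernels = [np.outer(e(0), e(a)), np.outer(e(a), e(0)),
               np.outer(e(a), e(a)), np.outer(e(a), e(-a))]
    patches = np.lib.stride_tricks.sliding_window_view(image, (win, win))
    codes = np.zeros(patches.shape[:2], dtype=int)
    for bit, k in enumerate(kernels):
        coeff = np.einsum('ijkl,kl->ij', patches.astype(complex), k)
        codes |= (coeff.real >= 0).astype(int) << (2 * bit)
        codes |= (coeff.imag >= 0).astype(int) << (2 * bit + 1)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

texture = np.random.default_rng(3).random((48, 48))
h = lpq_descriptor(texture)        # 256-bin normalized code histogram
```

Because the quantization keeps only phase signs at blur-insensitive frequencies, the histogram changes little under centrally symmetric blur, which is what makes it usable for classifying blurred textures.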